A family of second-order methods for convex ℓ1-regularized optimization

Authors

  • Richard H. Byrd
  • Gillian M. Chin
  • Jorge Nocedal
  • Figen Öztoprak
Abstract

This paper is concerned with the minimization of an objective that is the sum of a convex function f and an ℓ1 regularization term. Our interest is in methods that incorporate second-order information about the function f to accelerate convergence. We describe a semi-smooth Newton framework that can be used to generate a variety of second-order methods, including block active-set methods, orthant-based methods, and a second-order iterative soft-thresholding method. We also propose a new active-set method that performs multiple changes in the active manifold estimate and incorporates a novel mechanism for correcting these estimates when needed. This corrective mechanism is also evaluated in an orthant-based method. Numerical tests comparing the performance of several second-order methods are presented.
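To make the flavor of this family concrete, below is a minimal NumPy sketch of a second-order iterative soft-thresholding step: the gradient step is scaled by a diagonal curvature estimate before the ℓ1 proximal operator is applied. The names grad_f and hess_diag and the diagonal Hessian approximation are illustrative assumptions, not the algorithms specified in the paper, and the sketch omits the safeguards (e.g., a line search) a practical method would need.

  import numpy as np

  def soft_threshold(z, t):
      # Componentwise soft-thresholding: the proximal operator of t*||.||_1.
      return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

  def second_order_ista(grad_f, hess_diag, x0, lam, n_iter=100):
      # Scaled (second-order) iterative soft-thresholding sketch: each step
      # minimizes a quadratic model of f built from a positive diagonal
      # curvature estimate d ~ diag of the Hessian of f, plus lam*||x||_1.
      x = x0.copy()
      for _ in range(n_iter):
          d = hess_diag(x)                 # positive diagonal curvature estimate
          z = x - grad_f(x) / d            # scaled gradient step
          x = soft_threshold(z, lam / d)   # scaled proximal step (closed form)
      return x

For instance, with f(x) = ½‖Ax − b‖², grad_f is x ↦ Aᵀ(Ax − b) and a natural diagonal estimate is diag(AᵀA), the squared column norms of A.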

Similar articles

Stochastic Variance-Reduced Cubic Regularized Newton Method

We propose a stochastic variance-reduced cubic regularized Newton method for non-convex optimization. At the core of our algorithm is a novel semi-stochastic gradient along with a semi-stochastic Hessian, which are specifically designed for the cubic regularization method. We show that our algorithm is guaranteed to converge to an (ε, √ε)-approximate local minimum within Õ(n^{4/5}/ε^{3/2}) second-order oracl...
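For reference, a single deterministic cubic-regularized Newton step solves min_s gᵀs + ½sᵀHs + (M/6)‖s‖³. The sketch below approximates that subproblem by plain gradient descent; it is a toy sub-solver, and it omits the semi-stochastic gradient and Hessian that are the paper's actual contribution. The step size lr and iteration count are arbitrary illustrative choices.

  import numpy as np

  def cubic_newton_step(g, H, M, n_inner=500, lr=1e-2):
      # Approximately minimize the cubic model
      #   m(s) = g.T s + 0.5 s.T H s + (M/6) ||s||^3
      # by gradient descent on s; grad ||s||^3 = 3 ||s|| s.
      s = np.zeros_like(g)
      for _ in range(n_inner):
          grad_m = g + H @ s + 0.5 * M * np.linalg.norm(s) * s
          s -= lr * grad_m
      return s  # the outer iteration would set x_next = x + s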

A coordinate gradient descent method for ℓ1-regularized convex minimization

In applications such as signal processing and statistics, many problems involve finding sparse solutions to under-determined linear systems of equations. These problems can be formulated as structured nonsmooth optimization problems, i.e., the problem of minimizing ℓ1-regularized linear least squares problems. In this paper, we propose a block coordinate gradient descent method (abbreviated a...
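As a simplified illustration of the coordinate-descent idea (plain cyclic coordinate descent on the lasso objective, rather than the block coordinate gradient descent method this paper proposes), each single-coordinate subproblem of ½‖Ax − b‖² + λ‖x‖₁ has a closed-form soft-thresholding solution:

  import numpy as np

  def cd_lasso(A, b, lam, n_iter=50):
      # Cyclic coordinate descent for 0.5*||Ax - b||^2 + lam*||x||_1.
      # Assumes every column of A is nonzero.
      n = A.shape[1]
      x = np.zeros(n)
      col_sq = np.sum(A * A, axis=0)   # ||A_j||^2 for each column j
      r = b - A @ x                    # residual, maintained incrementally
      for _ in range(n_iter):
          for j in range(n):
              # Correlation of column j with the residual, coordinate j removed.
              rho = A[:, j] @ r + col_sq[j] * x[j]
              x_new = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
              r += A[:, j] * (x[j] - x_new)   # keep residual consistent
              x[j] = x_new
      return x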

A Generic Path Algorithm for Regularized Statistical Estimation.

Regularization is widely used in statistics and machine learning to prevent overfitting and gear the solution towards prior information. In general, a regularized estimation problem minimizes the sum of a loss function and a penalty term. The penalty term is usually weighted by a tuning parameter and encourages certain constraints on the parameters to be estimated. Particular choices of constraints...
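A common, much simpler alternative to a dedicated path algorithm is to sweep a grid of tuning parameters and warm-start each solve from the previous solution. The sketch below does this for the lasso with a plain ISTA inner loop; it is a generic warm-start heuristic under assumed names, not the generic path algorithm of this article.

  import numpy as np

  def lasso_path(A, b, lambdas, n_inner=500):
      # Trace minimizers of 0.5*||Ax - b||^2 + lam*||x||_1 over a
      # decreasing lam grid, warm-starting each solve from the last.
      L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the smooth part
      x = np.zeros(A.shape[1])
      path = []
      for lam in sorted(lambdas, reverse=True):
          for _ in range(n_inner):             # plain ISTA at this lam
              z = x - A.T @ (A @ x - b) / L
              x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
          path.append((lam, x.copy()))
      return path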

Estimation and Selection via Absolute Penalized Convex Minimization and Its Multistage Adaptive Applications

The ℓ1-penalized method, or the Lasso, has emerged as an important tool for the analysis of large data sets. Many important results have been obtained for the Lasso in linear regression, which have led to a deeper understanding of high-dimensional statistical problems. In this article, we consider a class of weighted ℓ1-penalized estimators for convex loss functions of a general form, including ...
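The weighted ℓ1 penalty Σᵢ wᵢ|xᵢ| still has a cheap proximal operator (componentwise soft-thresholding with per-coordinate thresholds), so any convex loss with a Lipschitz gradient can be minimized by proximal gradient descent; a multistage adaptive scheme then re-solves with weights recomputed from the previous estimate. The solver below is a generic sketch under assumed names (grad_loss, L), not the estimators analyzed in this article.

  import numpy as np

  def prox_weighted_l1(z, w):
      # Proximal operator of sum_i w_i |x_i|: per-coordinate thresholds.
      return np.sign(z) * np.maximum(np.abs(z) - w, 0.0)

  def weighted_l1_solver(grad_loss, L, x0, w, n_iter=500):
      # Proximal gradient for min_x loss(x) + sum_i w_i |x_i|,
      # assuming loss is convex with an L-Lipschitz gradient.
      x = x0.copy()
      for _ in range(n_iter):
          x = prox_weighted_l1(x - grad_loss(x) / L, w / L)
      return x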

CVaR Reduced Fuzzy Variables and Their Second Order Moments

Based on the credibilistic value-at-risk (CVaR) of regular fuzzy variables, we introduce a new CVaR reduction method for type-2 fuzzy variables. The reduced fuzzy variables are characterized by parametric possibility distributions. We establish some useful analytical expressions for the mean values and second-order moments of common reduced fuzzy variables. The convex properties of second-order moments with ...


Journal:
  • Math. Program.

Volume: 159   Issue: -

Pages: -

Publication date: 2016